Following the outbreak of a global pandemic, online content has become rife with hate speech. Donald Trump's ''Chinese Virus'' tweet shifted blame for the spread of COVID-19 to China and the Chinese people, triggering a new round of anti-China hate both online and offline. This research examines China-related hate speech on Twitter during the two years following the outbreak of the pandemic (2020 and 2021). Through Twitter's API, a total of 2,172,333 tweets with the hashtag #china posted during that period were collected. By employing multiple state-of-the-art pretrained language models for hate speech detection, we identify a wide range of hate of various types, resulting in an automatically labeled anti-China hate speech dataset. We find a hate rate in #china tweets of 2.5% in 2020 and 1.9% in 2021, well above the average rate of 0.6% for online hate speech on Twitter reported by Gao et al. (2017). We further analyze the longitudinal development of #china tweets, and of those identified as hateful, by visualizing the daily tweet count and hate rate over the two years. Our keyword analysis reveals the terms most frequently mentioned in hateful #china tweets, which can inform further social science studies.
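The keyword analysis mentioned above can be sketched as a simple term-frequency count over the tweets flagged as hateful. A minimal Python illustration follows; the tweets, tokenizer, and stopword list here are placeholder assumptions, not the paper's data or pipeline:

```python
# Toy sketch: count the most frequent terms in tweets flagged as hateful.
# The tweet texts and stopword list below are illustrative placeholders.
from collections import Counter
import re

flagged_tweets = [
    "#china hateful slur example one",
    "#china hateful slur example two",
]

# Terms too common or uninformative to report (assumed for illustration).
STOPWORDS = {"#china", "example"}

counts = Counter(
    token
    for tweet in flagged_tweets
    for token in re.findall(r"[#\w]+", tweet.lower())
    if token not in STOPWORDS
)
top_terms = counts.most_common(3)
print(top_terms)
```

In a real study the stopword list and tokenizer would need care (hashtags, mentions, and URLs all behave differently), but the output of such a count is exactly the kind of ranked term list the abstract describes.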
Deep Neural Networks (DNNs) are increasingly used as components of larger software systems that need to process complex data, such as images, written text, and audio/video signals. DNN predictions cannot be assumed to be always correct for several reasons, among them the huge input space being dealt with, the ambiguity of some input data, and the intrinsic properties of learning algorithms, which can provide only statistical guarantees. Hence, developers have to cope with some residual error probability. An architectural pattern commonly adopted to manage failure-prone components is the supervisor: an additional component that can estimate the reliability of the predictions made by untrusted (e.g., DNN) components and can activate an automated healing procedure when these are likely to fail, ensuring that the Deep Learning based System (DLS) does not cause damage, even though its main functionality is suspended. In this paper, we consider DLSs that implement a supervisor by means of uncertainty estimation. After overviewing the main approaches to uncertainty estimation and discussing their pros and cons, we motivate the need for an empirical assessment method tailored to the setting in which supervisors are used, where the accuracy of the DNN matters only as long as the supervisor lets the DLS continue to operate. We then present a large empirical study comparing the alternative approaches to uncertainty estimation, and distill a set of guidelines for developers who wish to incorporate a supervisor based on uncertainty monitoring into a DLS.
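The supervisor pattern described above can be illustrated with a toy uncertainty monitor: score each softmax output with predictive entropy and let the DLS act on the prediction only when the entropy is below a threshold. This is a minimal sketch under assumed values (the entropy score, the threshold, and the example probabilities are illustrative, not the paper's method):

```python
# Toy uncertainty-based supervisor: accept a DNN prediction only if the
# predictive entropy of its softmax output is below a threshold; otherwise
# reject and trigger the automated healing procedure.
import numpy as np

def entropy(probs):
    """Predictive entropy of a softmax output (nats)."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs))

def supervisor(probs, threshold=0.5):
    """Return (accept, predicted_class); a rejection would suspend the DLS."""
    accept = bool(entropy(probs) < threshold)
    return accept, int(np.argmax(probs))

confident = np.array([0.97, 0.02, 0.01])   # low entropy -> accept
ambiguous = np.array([0.40, 0.35, 0.25])   # high entropy -> reject

print(supervisor(confident))
print(supervisor(ambiguous))
```

Real supervisors typically use richer uncertainty estimates (e.g., Monte Carlo dropout or ensembles) rather than a single softmax, but the accept/reject decision structure is the same.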
Different types of mental rotation tests have been used extensively in psychology to understand human visual reasoning and perception. Understanding what an object or visual scene would look like from another viewpoint is a challenging problem, made even harder when it must be solved from a single image. We explore a controlled setting in which questions are posed about the properties of a scene as it would appear from another viewpoint. To do this, we have created a new version of the CLEVR dataset that we call CLEVR Mental Rotation Tests (CLEVR-MRT). Using CLEVR-MRT, we examine standard methods, show how they fall short, and then explore novel neural architectures that infer volumetric representations of a scene. These volumes can be manipulated via camera-conditioned transformations to answer the question. We examine different model variants through rigorous ablations and demonstrate the efficacy of volumetric representations.
Micron-scale robots (ubots) have recently shown great promise for emerging medical applications, and accurate control of ubots is a critical next step to deploying them in real systems. In this work, we develop the idea of a nonlinear mismatch controller to compensate for the mismatch between the disturbed unicycle model of a rolling ubot and trajectory data collected during an experiment. We exploit the differential flatness property of the rolling ubot model to generate a mapping from the desired state trajectory to nominal control actions. Due to model mismatch and parameter estimation error, the nominal control actions will not exactly reproduce the desired state trajectory. We employ a Gaussian Process (GP) to learn the model mismatch as a function of the desired control actions, and correct the nominal control actions using a least-squares optimization. We demonstrate the performance of our online learning algorithm in simulation, where we show that the model mismatch makes some desired states unreachable. Finally, we validate our approach in an experiment and show that the error metrics are reduced by up to 40%.
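The mismatch-learning step described above can be sketched with a toy GP regression: fit residuals between desired and achieved behavior as a function of the control action, then subtract the predicted mismatch from the nominal command. The following numpy-only sketch uses an assumed RBF kernel, assumed hyperparameters, and synthetic residuals, not the paper's experimental data:

```python
# Toy GP mismatch correction: learn the mismatch as a function of the
# control action from (synthetic) residual data, then correct the nominal
# action by subtracting the predicted mismatch.
import numpy as np

def rbf(A, B, ell=0.5):
    """Squared-exponential kernel between 1-D input arrays."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

# Synthetic training data: control action u vs. observed mismatch
# (a stand-in for the residuals collected during an experiment).
u_train = np.linspace(-1.0, 1.0, 20)
mismatch = 0.3 * np.sin(3.0 * u_train)

# GP posterior-mean weights (small jitter for numerical stability).
K = rbf(u_train, u_train) + 1e-6 * np.eye(len(u_train))
alpha = np.linalg.solve(K, mismatch)

def predict_mismatch(u):
    return rbf(np.atleast_1d(u), u_train) @ alpha

u_nominal = 0.4
u_corrected = u_nominal - predict_mismatch(u_nominal)[0]
print(u_corrected)
```

The paper couples this kind of learned mismatch model with a least-squares optimization over the nominal actions generated from the differentially flat unicycle model; the sketch above only shows the regression-and-correction idea.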
Geospatial Information Systems are used by researchers and Humanitarian Assistance and Disaster Response (HADR) practitioners to support a wide variety of important applications. However, collaboration between these actors is difficult due to the heterogeneous nature of geospatial data modalities (e.g., multi-spectral images of various resolutions, time series, weather data) and the diversity of tasks (e.g., regressing human activity indicators or detecting forest fires). In this work, we present a roadmap towards the construction of a general-purpose neural architecture (GPNA) with a geospatial inductive bias, pre-trained on large amounts of unlabelled earth observation data in a self-supervised manner. We envision how such a model may facilitate cooperation between members of the community. We show preliminary results on the first step of the roadmap, where we instantiate an architecture that can process a wide variety of geospatial data modalities and demonstrate that it can achieve competitive performance with domain-specific architectures on tasks relating to the U.N.'s Sustainable Development Goals.
Stochastic filters for online state estimation are a core technology for autonomous systems. The performance of such filters is one of the key limiting factors of a system's capability. Both the asymptotic behaviour of such filters (e.g., during regular operation) and their transient response (e.g., for fast initialization and reset) are critical to guaranteeing robust operation of autonomous systems. This paper introduces a new generic formulation of gyroscope-aided attitude estimation using n direction measurements, including both body-frame and reference-frame direction-type measurements. The approach is based on an integrated state formulation that combines the navigation state, the extrinsic calibration of all direction sensors, and the gyroscope bias state in a single equivariant geometric structure. This newly proposed symmetry allows the modular addition of different direction measurements and their extrinsic calibration while maintaining the ability to include the bias state in the same symmetry. A filter-based estimator built on this symmetry subsequently shows significantly improved transient response, as well as improved asymptotic bias and extrinsic calibration estimation, compared with state-of-the-art approaches. The estimator is verified in statistically representative simulations and tested in real-world experiments.
We study the problem of identifying biometric information about an individual by looking at the shadows an object casts onto a diffuse surface. We show, via maximum-likelihood analysis, that the leakage of biometric information in shadows can be sufficient for reliable identity inference in representative scenarios. We then develop a learning-based method that demonstrates this phenomenon in real settings, exploiting the subtle cues in shadows as the source of leakage, without requiring any labeled real data. In particular, our method relies on constructing synthetic scenes composed of 3D face models obtained from a single photograph of each identity. We transfer what we learn from the synthetic data to real data in a fully unsupervised manner. Our model generalizes well to the real domain and is robust to several variations in the scene. We report high classification accuracy on the identity-classification task, even in scenes with unknown geometry and occluding objects.
In this paper, we apply preprocessing techniques to multichannel time-series data of varying lengths, a problem we refer to as the alignment problem, for downstream machine learning. Misalignment in multichannel time-series data can occur for a variety of reasons, such as missing data, varying sampling rates, or inconsistent collection times. We consider multichannel time-series data collected from the MIT SuperCloud high-performance computing (HPC) center, where different job start times and varying run times of HPC jobs result in misaligned data. This misalignment makes it challenging to build AI/ML approaches for tasks such as compute-workload classification. Building on previous supervised-classification work with the MIT SuperCloud dataset, we address the alignment problem via three broad, low-overhead approaches: sampling a fixed subset from the full time series, computing summary statistics over the full time series, and sampling coefficients from the time series mapped to the frequency domain. Our best-performing models achieve a classification accuracy greater than 95%, outperforming previous approaches to multichannel time-series classification on the MIT SuperCloud dataset by 5%. These results indicate that our low-overhead approaches, combined with standard machine-learning techniques, can achieve high levels of classification accuracy and serve as a baseline for future approaches to the alignment problem, such as kernel methods.
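The three low-overhead strategies can be sketched as feature maps that turn variable-length series into fixed-size vectors. A toy illustration follows; the series lengths, values, and chosen feature dimensions are synthetic assumptions, not the MIT SuperCloud data or the paper's exact settings:

```python
# Toy sketch of three alignment strategies for variable-length series:
# (1) a fixed-length subset, (2) summary statistics, (3) leading magnitudes
# of the frequency-domain representation. Each maps any length to a fixed size.
import numpy as np

def fixed_subset(x, k=8):
    return x[:k]                                   # fixed-length sample

def summary_stats(x):
    return np.array([x.mean(), x.std(), x.min(), x.max()])

def fft_features(x, k=4):
    return np.abs(np.fft.rfft(x))[:k]              # leading frequency magnitudes

# Three "jobs" with different run lengths (synthetic stand-ins).
series = [np.random.default_rng(i).normal(size=n) for i, n in enumerate([50, 120, 75])]

# One fixed-size feature row per job, regardless of its original length.
features = np.stack([np.concatenate([summary_stats(s), fft_features(s)]) for s in series])
print(features.shape)
```

Once every series maps to the same-shaped feature vector, standard classifiers can be trained directly, which is the point of these low-overhead approaches.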
We study the potential of data-driven deep learning methods for separating two communication signals from an observation of their mixture. In particular, we assume knowledge of the generation process of one of the signals, dubbed the signal of interest (SOI), and no knowledge of the generation process of the second signal, referred to as interference. This form of the single-channel source-separation problem is also known as interference rejection. We show that capturing high-resolution temporal structure (non-stationarity), which enables accurate synchronization to both the SOI and the interference, leads to substantial performance gains. With this key insight, we propose a domain-informed neural network (NN) design that improves upon both "off-the-shelf" NNs and classical detection and interference-rejection methods, as demonstrated in our simulations. Our findings highlight the key role that communications-specific domain knowledge plays in developing data-driven approaches that hold the promise of unprecedented gains.
With COVID-19 now pervasive, identification of high-risk individuals is crucial. Using data from a major healthcare provider in Southwestern Pennsylvania, we develop survival models that predict severe COVID-19 progression. In this work, we face a tradeoff between more accurate models that rely on many features and models that rely on a few features aligned with clinician intuition. Complicating matters, many EHR features tend to be under-coded, degrading the accuracy of smaller models. In this study, we develop two sets of high-performance risk scores: (i) an unconstrained model built from all available features; and (ii) a pipeline that learns a small set of clinical concepts before training a risk predictor. The learned concepts boost performance over the corresponding features (C-index 0.858 vs. 0.844) and demonstrate improvements over (i) when evaluated out-of-sample (in a subsequent time period). Our models outperform prior work (C-index 0.844-0.872 vs. 0.598-0.810).
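The C-index reported above measures the fraction of comparable patient pairs in which the higher-risk patient experiences the event first. A minimal implementation on toy data follows; the times, event indicators, and risk scores are illustrative placeholders, not results from the study:

```python
# Toy concordance index (C-index): among comparable pairs (an observed event
# occurring before another patient's time), count how often the patient with
# the earlier event also has the higher predicted risk; ties score 0.5.
def c_index(times, events, risks):
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable if i's event is observed before j's time.
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

times  = [2, 5, 7, 9]          # days to severe progression or censoring
events = [1, 1, 0, 1]          # 1 = event observed, 0 = censored
risks  = [0.9, 0.7, 0.2, 0.8]  # predicted risk scores

print(c_index(times, events, risks))  # 0.8
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why the gap between 0.844-0.872 and 0.598-0.810 in the abstract is a substantial one.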